This microbook is a summary/original review based on the book: Life 3.0: Being Human in the Age of Artificial Intelligence
ISBN: 1101946598
Publisher: Vintage
Is artificial intelligence the best or the worst thing that can happen to humanity? Well-known Swedish American physicist and AI researcher Max Tegmark investigates this question in his spellbinding 2017 book, “Life 3.0.” So, get ready to speculate with him about what lies ahead – and what may be at stake.
Defining life is a notoriously difficult problem. However, for his book, Tegmark chooses the broadest definition possible and describes life as “a self-replicating information-processing system whose information (software, DNA) determines both its behavior and the blueprints for its hardware (cell, body).”
The only place in the vast universe where this kind of process is known to occur is our planet. And, ever since it first appeared 4 billion years ago, it’s been constantly evolving and growing more complex. To make some things clearer, Tegmark sets aside traditional biological taxonomy and starts off his discussion of humans in the age of AI by classifying all possible life-forms into three very broad groups, according to their levels of sophistication:
Life 1.0 (the biological stage): life that survives and replicates, but whose hardware and software are both shaped by evolution rather than design – bacteria, for instance.
Life 2.0 (the cultural stage): life whose hardware is evolved, but which can design much of its own software through learning – that is, humans.
Life 3.0 (the technological stage): life that can design both its software and its hardware – the hypothetical AI-based life-forms of the future.
The universe originated 13.8 billion years ago, and Life 1.0 first appeared a staggering 10 billion years later. It took Life 1.0 about 4 billion years to evolve into Life 2.0, and then, in no more than 300,000 years, humans arrived on the brink of fashioning future 3.0 life-forms.
“On the brink,” however, is only true in cosmological terms and doesn’t really mean much any other way. The arrival of 3.0 life-forms is neither inevitable nor impossible: it may happen in decades, in centuries, or never. The idea that we know when 3.0 life-forms will appear is one of the most common AI myths and misconceptions. Another myth is that only Luddites worry about AI – many top AI researchers are also concerned. And yet a third myth is that AI will never be able to control humans. Just as we control tigers solely because we’re smarter, a more intelligent AI should be able to control us. Intelligence enables control.
Although there are many other unknowns, the conversation around 3.0 life-forms among world-leading experts essentially centers on two questions: when (if ever) it will happen and what it will mean for humanity. Depending on their views, AI enthusiasts fall into one of three groups:
Digital utopians, who see digital life as the natural next step of cosmic evolution and welcome it;
Techno-skeptics, who believe AGI is so far away that worrying about it now is premature;
The beneficial-AI movement, who believe AGI may well arrive this century and that a good outcome requires deliberate AI-safety research.
It’s important to note straight away that not even the most passionate techno-skeptics question the potential of AI to profoundly influence our lives in the very near future. Regardless of when – or even if – AI will reach a human level across all skills (AGI), it won’t take long before narrow AI irretrievably changes how we live our lives and deal with some of our most pressing issues, if progress continues at the current rate. We’re not merely talking about self-driving cars and surgical bots – we’re talking about more just and more equal societies as well.
For example, as advanced as our legal systems are relative to those of previous times, they are still fraught with innumerable problems. Though human intelligence is remarkably broad, humans are fallible and biased. On the other hand, as incipient as today’s AI is, it is immensely better than us at solving well-delineated computational problems. Viewed abstractly, the legal system is nothing more than a complex computational problem: the input is information about evidence and laws, and the output an appropriate decision. Unlike humans, the AI-based “robojudges” of the future should be able to “tirelessly apply the same high legal standards to every judgment without succumbing to human errors such as bias, fatigue, or lack of the latest knowledge.”
It’s not even a question anymore whether AI can make our electric power systems more efficient, and, theoretically, the same should hold for something far more important: the economic order. AI is already replacing humans on the job market, and it may soon be capable of distributing wealth more justly. Moreover, AI-powered drones and other autonomous weapon systems (AWS) could even make wars more humane: “if wars consist merely of machines fighting machines, then no human soldiers or civilians need to get killed.”
Unfortunately, all of this comes with caveats. For example, “our laws need rapid updating to keep up with AI, which poses tough legal questions involving privacy, liability, and regulation.” Likewise, if AI-created wealth doesn’t get redistributed, inequality will greatly increase in the low-employment society of the future. In such a society, few things could be worse than AWS, “available to everybody with a full wallet and an ax to grind,” reminds the author. “When we allow real-world systems to be controlled by AI,” Tegmark warns, “it’s crucial that we learn to make AI more robust, doing what we want it to do. This boils down to solving tough technical problems related to verification, validation, security, and control.”
Unless regulated, AGI and subsequent 3.0 life-forms could take over the world. As far-fetched as this might sound, the three steps that separate us from such a future are both logical and consequential:
Step 1: build human-level AGI;
Step 2: use that AGI to create superintelligence;
Step 3: use or unleash that superintelligence to take over the world.
Of the three steps, the first seems the most challenging. But once we build AGI – “the holy grail of AI research” – the resulting machine should be “capable enough to recursively design ever-better AGI” and cause an intelligence explosion, leading to the second step. However, there is absolutely no consensus on whether that will happen and where it might leave us as humans. AI experts have postulated and debated several possible scenarios for the third step, which can be grouped into three broader categories.
“The climax of our current race toward AI may be either the best or the worst thing ever to happen to humanity,” notes Tegmark. He also warns, hauntingly, that we must take into consideration all possible outcomes, and “start thinking hard about which outcome we prefer and how to steer in that direction. Because if we don’t know what we want, we’re unlikely to get it.”
Selected as one of Barack Obama’s favorite books of 2018, Max Tegmark’s “Life 3.0” is a riveting and thought-provoking book on “the most important conversation of our time.” It is also, in the words of Elon Musk, “a compelling guide to the challenges and choices in our quest for a great future of life, intelligence and consciousness – on Earth and beyond.” A must-read.
Though AGI is (at least) decades away from being a reality, AI is already transforming the world. Consequently, Tegmark’s – and our – career advice for today’s kids: “Go into professions that machines are bad at – those involving people, unpredictability, and creativity.”
Max Tegmark is a Swedish American physicist, cosmologist, and AI scholar, and a professor at the Massachusetts Institute of Technology.